Reviews: Combining Generative and Discriminative Models for Hybrid Inference
Overall this is a nice idea: using black-box models to amortize the residuals from doing inference under a linearized approximation to the model. I found the experiments to be well organized, albeit mostly on small-scale/synthetic data. Summary: This paper introduces a procedure for combining graph neural networks with traditional methods for probabilistic inference (instantiated in HMMs). When we have linear (Gaussian) dynamics in an HMM, inference is exact. For nonlinear dynamics, when we have access to the functional form of the true dynamics of the state-space model, we can linearize the transition and emission functions (via a first-order Taylor expansion) and represent them as matrices.
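The linearization step the review mentions can be sketched numerically: approximate a nonlinear transition function by its Jacobian at the current state estimate, as in an extended Kalman filter. The dynamics below are a made-up toy example, not the paper's actual model.

```python
import numpy as np

# Hypothetical nonlinear transition for a 2-D state (position, velocity);
# illustrative only, not the dynamics used in the paper.
def f(x, dt=0.1):
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * np.sin(pos)])

def jacobian(f, x, eps=1e-6):
    """First-order Taylor expansion: numerically estimate F = df/dx at x."""
    n = x.size
    F = np.zeros((n, n))
    fx = f(x)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        F[:, j] = (f(x + dx) - fx) / eps
    return F

x = np.array([1.0, 0.5])
F = jacobian(f, x)  # linearized transition matrix at x

# Near x, f(x + d) ~= f(x) + F @ d, so Gaussian (Kalman-style) inference
# can proceed with F standing in for a linear transition matrix.
delta = np.array([0.01, -0.02])
approx = f(x) + F @ delta
exact = f(x + delta)
print(np.allclose(approx, exact, atol=1e-3))
```

For small perturbations the linearized prediction tracks the true dynamics closely; the approximation error is what the paper's learned component is meant to absorb.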
Combining Generative and Discriminative Models for Hybrid Inference
A graphical model is a structured representation of the data generating process. The traditional method to reason over random variables is to perform inference in this graphical model. However, in many cases the generating process is only a poor approximation of the much more complex true data generating process, leading to suboptimal estimation. The subtleties of the generative process are, however, captured in the data itself, and we can "learn to infer", that is, learn a direct mapping from observations to explanatory latent variables. In this work we propose a hybrid model that combines graphical inference with a learned inverse model, which we structure as a graph neural network, while the iterative algorithm as a whole is formulated as a recurrent neural network. By using cross-validation we can automatically balance the amount of work performed by graphical inference versus learned inference.
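The hybrid update described in the abstract can be sketched schematically: each iteration adds an analytic message from the graphical model to a learned correction. Everything below is a hypothetical toy version (scalar state, a fixed linear map standing in for the GNN), not the paper's implementation.

```python
import numpy as np

def gm_message(x, y, A=0.9, C=1.0, q=0.1, r=0.1):
    """Analytic 'generative' signal: gradient of a linear(ized) Gaussian
    state-space model's log-density w.r.t. the state x, given observation y."""
    # shrinkage toward the transition prediction A*x plus a pull toward y
    return -(x - A * x) / q + C * (y - C * x) / r

def learned_correction(residual, W):
    """Stand-in for the learned inverse model: in the paper this is a
    graph neural network; here just a linear map on the residual signal."""
    return W * residual

def hybrid_step(x, y, W, lr=0.01):
    """One step of the recurrent hybrid iteration: GM message + learned term."""
    msg = gm_message(x, y)
    return x + lr * (msg + learned_correction(msg, W))

x, y, W = 0.0, 1.0, 0.5  # toy state, observation, and 'learned weight'
for _ in range(200):
    x = hybrid_step(x, y, W)
print(round(x, 3))  # converges to the fixed point of the combined update
```

The point of the sketch is only the structure of the update: the graphical-model term alone would already define an inference procedure, and the learned term reshapes it where the assumed generative model is wrong.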
Combining Generative and Discriminative Models in NLP
Natural Language Processing (NLP) is an ever-evolving field of computer science that focuses on the interaction between humans and computers using natural language. NLP encompasses various tasks, such as text classification, sentiment analysis, machine translation, and language generation. Researchers have developed many models to address these tasks; two primary model families in NLP are generative and discriminative models. In this blog post, I'll explain how combining generative and discriminative models can lead to highly accurate and robust NLP systems. Generative models are probabilistic models that can generate new text based on the input given to them. These models are trained on large amounts of unlabeled data and can be fine-tuned to perform various NLP tasks.
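The generative/discriminative distinction can be made concrete on toy data: a generative classifier models p(x|y) and p(y) and classifies via Bayes' rule, while a discriminative one models p(y|x) directly. The 1-D Gaussian data below is purely illustrative (not NLP text).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D features for two classes (illustrative stand-in for text features)
x0 = rng.normal(-1.0, 1.0, 500)  # class 0
x1 = rng.normal(+1.0, 1.0, 500)  # class 1

# Generative: fit p(x | y) per class, classify with Bayes' rule
mu0, mu1 = x0.mean(), x1.mean()
def gen_predict(x):
    # equal priors and unit variances assumed, so compare class likelihoods
    return int((x - mu1) ** 2 < (x - mu0) ** 2)

# Discriminative: fit p(y | x) directly with a logistic decision rule,
# trained by a few steps of gradient ascent on the log-likelihood
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(500), np.ones(500)])
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w += 0.1 * np.mean((y - p) * X)
    b += 0.1 * np.mean(y - p)
def disc_predict(x):
    return int(w * x + b > 0)

print(gen_predict(1.2), disc_predict(1.2))  # both should predict class 1
```

On well-separated data the two agree; they differ in what they can do beyond classification (the generative model can also sample new x), which is exactly the complementarity hybrid approaches try to exploit.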
Combining Generative and Discriminative Models for Hybrid Inference
Victor Garcia Satorras, Zeynep Akata, Max Welling